
Time series classification of satellite data using LSTM networks: an approach for predicting leaf-fall to minimize railroad traffic disruption

de Wilde, Hein, Alsahag, Ali Mohammed Mansoor, Blanchet, Pierre

arXiv.org Artificial Intelligence

Railroad traffic disruption as a result of leaf-fall costs the UK rail industry over £300 million per year, and measures to mitigate such disruptions are employed on a large scale, with 1.67 million kilometers of track being treated in the UK in 2021 alone. Therefore, the ability to anticipate the timing of leaf-fall would offer substantial benefits for rail network operators, enabling the efficient scheduling of such mitigation measures. However, current methodologies for predicting leaf-fall exhibit considerable limitations in terms of scalability and reliability. This study endeavors to devise a prediction system that leverages specialized prediction methods and the latest satellite data sources to generate both scalable and reliable insights into leaf-fall timings. An LSTM network trained on ground-truth leaf-fall data combined with multispectral and meteorological satellite data demonstrated a root-mean-square error of 6.32 days for predicting the start of leaf-fall and 9.31 days for predicting the end of leaf-fall. The model, which improves upon previous work on the topic, offers promising opportunities for the optimization of leaf mitigation measures in the railway industry and the improvement of our understanding of complex ecological systems.
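To make the architecture concrete, the following is a minimal numpy sketch of a single LSTM cell step, the recurrence such a network applies to each timestep of a satellite time series. All dimensions, weights, and the three-feature input are toy values for illustration, not the paper's actual model or data.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM cell step.

    W: (4*H, D) input weights, U: (4*H, H) recurrent weights, b: (4*H,)
    biases, stacked in gate order [input, forget, cell, output].
    """
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    g = np.tanh(z[2*H:3*H])             # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:]))      # output gate
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

# Toy sequence: 5 timesteps of 3 features (e.g. spectral bands).
rng = np.random.default_rng(0)
D, H, T = 3, 4, 5
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

The final hidden state `h` summarizes the sequence; in a classifier it would feed a dense output layer predicting, for example, whether leaf-fall has started.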


From Safety Standards to Safe Operation with Mobile Robotic Systems Deployment

Belzile, Bruno, Wanang-Siyapdjie, Tatiana, Karimi, Sina, Braga, Rafael Gomes, Iordanova, Ivanka, St-Onge, David

arXiv.org Artificial Intelligence

Mobile robotic systems are increasingly used in various work environments to support productivity. However, deploying robots in workplaces crowded with human workers, and interacting with those workers, raises safety challenges and concerns, namely robot-worker collisions and worker distraction in hazardous environments. Moreover, the literature on risk assessment, as well as the standards specific to mobile platforms, is rather limited. In this context, this paper first conducts a review of the relevant standards and methodologies and then proposes a risk assessment for the safe deployment of mobile robots on construction sites. The approach extends relevant existing safety standards to encompass uncovered scenarios. Safety recommendations are made based on the framework, after its validation by field experts.


A Frontier AI Risk Management Framework: Bridging the Gap Between Current AI Practices and Established Risk Management

Campos, Simeon, Papadatos, Henry, Roger, Fabien, Touzet, Chloé, Murray, Malcolm, Quarks, Otter

arXiv.org Artificial Intelligence

The recent development of powerful AI systems has highlighted the need for robust risk management frameworks in the AI industry. Although companies have begun to implement safety frameworks, current approaches often lack the systematic rigor found in other high-risk industries. This paper presents a comprehensive risk management framework for the development of frontier AI that bridges this gap by integrating established risk management principles with emerging AI-specific practices. The framework consists of four key components: (1) risk identification (through literature review, open-ended red-teaming, and risk modeling), (2) risk analysis and evaluation using quantitative metrics and clearly defined thresholds, (3) risk treatment through mitigation measures such as containment, deployment controls, and assurance processes, and (4) risk governance establishing clear organizational structures and accountability. Drawing from best practices in mature industries such as aviation or nuclear power, while accounting for AI's unique challenges, this framework provides AI developers with actionable guidelines for implementing robust risk management. The paper details how each component should be implemented throughout the life-cycle of the AI system - from planning through deployment - and emphasizes the importance and feasibility of conducting risk management work prior to the final training run to minimize the burden associated with it.


Effective Mitigations for Systemic Risks from General-Purpose AI

Uuk, Risto, Brouwer, Annemieke, Schreier, Tim, Dreksler, Noemi, Pulignano, Valeria, Bommasani, Rishi

arXiv.org Artificial Intelligence

The systemic risks posed by general-purpose AI models are a growing concern, yet the effectiveness of mitigations remains underexplored. Previous research has proposed frameworks for risk mitigation, but has left gaps in our understanding of the perceived effectiveness of measures for mitigating systemic risks. Our study addresses this gap by evaluating how experts perceive different mitigations that aim to reduce the systemic risks of general-purpose AI models. We surveyed 76 experts whose expertise spans AI safety; critical infrastructure; democratic processes; chemical, biological, radiological, and nuclear risks (CBRN); and discrimination and bias. Among 27 mitigations identified through a literature review, we find that a broad range of risk mitigation measures are perceived by domain experts as effective in reducing various systemic risks and as technically feasible. In particular, three mitigation measures stand out: safety incident reports and security information sharing, third-party pre-deployment model audits, and pre-deployment risk assessments. These measures both show the highest expert agreement ratings (>60%) across all four risk areas and are the most frequently selected in experts' preferred combinations of measures (>40%). The surveyed experts highlighted that external scrutiny, proactive evaluation, and transparency are key principles for effective mitigation of systemic risks. We provide policy recommendations for implementing the most promising measures, incorporating the qualitative contributions from experts. These insights should inform regulatory frameworks and industry practices for mitigating the systemic risks associated with general-purpose AI.
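The selection criterion described above can be sketched in a few lines: for each mitigation, check whether its expert agreement exceeds the threshold in every risk area. The rating numbers and mitigation names below are made-up placeholders, not the survey's actual results.

```python
# Hypothetical agreement scores: fraction of experts rating each
# mitigation effective, broken down by risk area.
ratings = {
    "incident reporting & info sharing": {
        "AI safety": 0.72, "CBRN": 0.68, "democracy": 0.65, "bias": 0.63},
    "third-party pre-deployment audits": {
        "AI safety": 0.70, "CBRN": 0.66, "democracy": 0.62, "bias": 0.61},
    "output watermarking": {
        "AI safety": 0.55, "CBRN": 0.40, "democracy": 0.58, "bias": 0.50},
}

THRESHOLD = 0.60

# Keep mitigations whose agreement clears the threshold in ALL areas.
broadly_effective = [
    name for name, by_area in ratings.items()
    if all(score > THRESHOLD for score in by_area.values())
]
```

With these placeholder numbers, only the first two mitigations clear the 60% bar in every area, mirroring the paper's "effective across all four risk areas" criterion.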


Interview with Katherine Mayo: An agent-based analysis of real-time payments and fraud risk mitigation

AIHub

In their paper Fraud Risk Mitigation in Real-Time Payments: A Strategic Agent-Based Analysis, Katherine Mayo, Nicholas Grabill and Michael Wellman consider real-time payments, and employ an agent-based model to investigate potential strategies for banks in the face of fraud. We asked Katherine about this work, why it is an important topic, and how the team went about tackling the problem. Payments generally adhere to the following sequence of steps: initiation by the sender, processing by the bank, and finally the release of funds to the receiver. The standard debit or credit transactions most people are familiar with often suffer processing delays of one or more days. However, recent advancements in technology have allowed for the introduction of a new, faster payment type boasting drastic decreases in processing times.


Fortify Your Defenses: Strategic Budget Allocation to Enhance Power Grid Cybersecurity

Meyur, Rounak, Purohit, Sumit, Webb, Braden K.

arXiv.org Artificial Intelligence

The abundance of cyber-physical components in the modern-day power grid, with their diverse hardware and software vulnerabilities, has made it difficult to protect them from advanced persistent threats (APTs). An attack graph depicting the propagation of potential cyber-attack sequences from the initial access point to the end objective is vital to identify critical weaknesses of any cyber-physical system. Cybersecurity personnel can accordingly plan preventive mitigation measures for the identified weaknesses, addressing the cyber-attack sequences. However, limitations on the available cybersecurity budget restrict the choice of mitigation measures. We address this aspect through our framework, which solves the following problem: given potential cyber-attack sequences for a cyber-physical component in the power grid, find the optimal manner to allocate an available budget to implement necessary preventive mitigation measures. We formulate the problem as a mixed integer linear program (MILP) to identify the optimal budget partition and the set of mitigation measures which minimize the vulnerability of cyber-physical components to potential attack sequences. We assume that the allocation of budget affects the efficacy of the mitigation measures. We show how altering the budget allocation for tasks such as asset management, cybersecurity infrastructure improvement, incident response planning and employee training affects the choice of the optimal set of preventive mitigation measures and modifies the associated cybersecurity risk. The proposed framework can be used by cyber policymakers and system owners to allocate optimal budgets for various tasks required to improve the overall security of a cyber-physical system.
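The core budget-allocation problem has the flavor of a knapsack: pick mitigation measures whose total cost fits the budget while maximizing risk reduction. The sketch below solves a toy instance by exhaustive search rather than a MILP solver, and the measure names, costs, and risk-reduction values are illustrative assumptions, not figures from the paper.

```python
from itertools import combinations

# Hypothetical mitigation measures: (name, cost, risk reduction).
measures = [
    ("asset management",       30, 0.25),
    ("infrastructure upgrade", 50, 0.40),
    ("incident response plan", 20, 0.15),
    ("employee training",      15, 0.10),
]
budget = 70

def best_allocation(measures, budget):
    """Exhaustively search subsets (a stand-in for the MILP solver):
    maximize total risk reduction subject to the budget constraint."""
    best, best_reduction = (), 0.0
    for r in range(len(measures) + 1):
        for subset in combinations(measures, r):
            cost = sum(c for _, c, _ in subset)
            reduction = sum(red for _, _, red in subset)
            if cost <= budget and reduction > best_reduction:
                best, best_reduction = subset, reduction
    return [name for name, _, _ in best], best_reduction

chosen, reduction = best_allocation(measures, budget)
```

At scale, and with budget-dependent efficacy as the paper assumes, this becomes a MILP handled by a dedicated solver; the brute-force version only conveys the objective and constraint.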